Digital Privacy
08 Oct 2024 Mariano delli Santi
The ICO is leaving an AI enforcement gap in the UK
In response to our formal complaint to the ICO against Meta’s use of personal data to train Artificial Intelligence models without consent, the ICO has invited Open Rights Group (ORG) to a meeting to discuss our concerns. This invite comes, however, after Meta has resumed training its AI models without the consent of Facebook and Instagram users… in the UK. Microsoft has also announced (and, for the time being, paused) its plans to use LinkedIn data to train AI models… in the UK only.
The AI data grab
LinkedIn and Meta are the latest in a long list of online platforms that have stirred controversy by using their users’ data to train AI products.
So far, the Information Commissioner’s Office (ICO) has shown a tolerant attitude toward AI companies’ data grabs, making an unwelcome contrast with the assertiveness shown by European Data Protection Authorities (DPAs). In the EU, Meta was forced to suspend its plans to train AI on users’ data. The social media platform X had to sign an undertaking to suspend the use of EU personal data for its AI model in order to avoid legal action. Quite tellingly, LinkedIn did not even try to pursue its plans in Europe, but seems quite confident that it will be permitted to go ahead in the UK.
None of these developments are good news for UK residents and the country at large. As we live in a digital society, we entrust a growing number of organisations and online providers with our data, either by providing it voluntarily or by generating it through the use of online services. Indeed, the US Federal Trade Commission recently found that “Social media users [in the US] lack control over data used by AI”. In the EU and the UK, by contrast, data protection law provides much-needed legal boundaries and protections against unfair or discriminatory data uses, as well as safeguards that protect individuals’ right to choose and to control how their data is used.
By sending the message that the use of UK people’s data is fair game, the ICO is leaving one-sided commercial exploitation unchecked, at the expense of the information rights of the British public. Lack of enforcement of consent requirements will only benefit large and vertically integrated companies, at the expense of competition, new companies and law-abiding businesses. In the long term, lack of data protection enforcement in the field of AI is bound to favour irresponsible and unsafe business practices, dysfunctional market dynamics, and a digital economy where the benefits of innovation are not shared.
The root causes of the ICO’s permissive approach can be traced, at least partially, to its underlying legal framework: the public currently lacks a clear redress avenue to challenge the ICO when it decides not to take action on their complaints, or not to use its enforcement powers to address an infringement of information rights. In turn, this has given the ICO quasi-unfettered discretion, which it has used to develop a regulatory approach that significantly diverges from the one written into its statutes. With the Digital Information and Smart Data Bill, the new Labour Government has a golden opportunity to state clearly in legislation that the ICO has a duty to investigate complaints, and to allow the Information Tribunal to scrutinise the appropriateness of the Commissioner’s response to a complaint. But will they?
Bear with us: in this unusually long read, ORG takes a deep dive into these issues.
The story so far and the enforcement gap between the EU and the UK
It was Meta who first announced that it would reuse personal data held on Facebook and Instagram to train its new AI technologies. In response, EU privacy rights group None of Your Business (NOYB) lodged several complaints in the European Union, asking the Irish Data Protection Commission to stop Meta’s changes to its privacy policy. Pressured by EU DPAs, Meta paused its plans in both the EU and the UK, after which ORG joined NOYB and filed a representative complaint in the UK as well. In particular, we emphasised that there was no legally binding measure that would prevent Meta from resuming its illegal data processing operations, and we therefore urged the ICO to adopt urgent measures.
Unfortunately, our fears have materialised: on 13 September, Meta and the ICO announced, respectively, that Meta would resume “consent-less” use of UK data to train its AI, and that the ICO would not stop it. Even more interestingly, the ICO gave its announcement on a Friday, a day Meta routinely chooses in order to reduce the effect of such news on its stock price. In the meantime, Facebook and Instagram users who are fortunate enough to be resident in the EU can still use Meta’s services without having to worry about their data and activities being fed into some unspecified AI product without their consent.
Despite the EU and the UK sharing the same data protection rulebook, Meta’s case became the first in a long list of cases that drew a line between the level of protection afforded to UK residents and that afforded to people in the EU. The social media company X quietly started training its AI on users’ data, but was swiftly brought to court by the Irish DPC. Eventually, X had to sign an undertaking to suspend any use of data to train AI until a regulatory investigation is concluded. In the UK, however, the ICO has been silent.
Finally, LinkedIn also started using users’ data to train AI—this time, however, only in the UK and the rest of the world outside the EU. LinkedIn has now put its plans on hold, a decision that was welcomed by the ICO in a reaction statement. Unfortunately, there is little reason to celebrate. As we have seen in Meta’s case, this suspension is likely to be temporary, and we don’t expect the ICO to stop LinkedIn from resuming its plans—after all, it has allowed Meta to do the same.
In any case, the Meta, X and LinkedIn cases all suggest that AI companies see UK data as fair game: in contrast to Data Protection Authorities in the EU, the ICO will only take “light touch” regulatory actions and will ultimately let them proceed.
Force-fed innovation is bad innovation
Central to Meta, X and LinkedIn’s plans is the claim in their privacy policies that the commercial deployment of their AI products constitutes a “legitimate interest” that prevails over the rights and freedoms of their users, thus exempting them from the requirement to obtain free and informed consent. The law, however, does not give legitimacy to this position: a legitimate interest can justify data uses only if it passes a so-called “balancing test”—in other words, if it proves sufficiently strong or uncontroversial to override the risks for the individuals whose data is being used.
Commercialising an AI product is, however, a one-sided interest whose benefits, namely profits, accrue only to the AI companies themselves; it is thus a rather weak “legitimate interest” to begin with. On the other hand, AI is a highly experimental, untested and ultimately unsafe technology, and thus exposes individuals to significant risks; indeed, risk mitigation is at the heart of the new Irish DPA probe into Google AI products. Quite obviously, a weak and ultimately self-serving interest cannot override the rights and freedoms of millions of individuals, nor have any of these companies demonstrated that they carried out any such balancing assessment.
Further proof that our rights cannot be trumped by such weak commercial interests can be found not only in legal textbooks, but also in common sense. Nowadays, we routinely rely on a large number of online platforms to go about our lives—online banking, social media, ride-hailing and delivery services, dating apps, and the list goes on. The public cannot be expected to monitor and chase every single online company that decides to repurpose our data to train AI or otherwise pursue its own petty interests. The legitimate interest legal basis clearly requires a higher threshold to be met before it can be relied upon, or else it would frustrate the purpose of data protection rights: to give people control over how their data is used.
AI companies’ response to the above would probably go along the lines that their AI products serve the interests of humanity, foster innovation, or at least “improve services for consumers”. While the wishful thinking of AI companies bears no legal weight, it is worth noting that people are usually keen to give their consent to a product or service that serves their needs. Relentless attempts to bypass users’ consent and self-determination are the strongest evidence that AI companies themselves do not believe their “innovations” would bring enough value to their users to convince them to adopt these products voluntarily. Needless to say, forcing customers to adopt a commercial product is not a legitimate interest, nor an aim that the law should protect.
Circumventing consent requirements harms competition
The ICO’s apparent reluctance to enforce GDPR consent requirements in the UK isn’t only affecting the rights of the British public—it is also damaging AI market dynamics and enabling market power concentration, to the detriment of competition and the UK business environment.
AI is a computer programme that works by analysing large amounts of data and identifying patterns, which it later replicates according to the instructions it is given. This makes access to data an important factor in competing in the AI market. Indeed, “data frontier” is a common term the industry uses to describe the lack of new data on which to train its products. On top of that, the industry is reckoning with the significant backlash that the unrestrained exploitation of publicly available data has generated, notably in breach of copyright protection and, more recently, of privacy protections. As a result, AI companies have started to seek licensing agreements to gain lawful access to either copyright-protected data or data which is otherwise not publicly accessible.
In other words, data is a finite resource, and a rise in demand is bound to increase the value of data access over time. Against this background, allowing online platforms to bypass consent requirements and freely use their users’ data to train AI products introduces a significant distortion in the AI data market, as it allows large and vertically integrated companies—such as Meta, X and Microsoft, which owns LinkedIn and, de facto, controls OpenAI—to gain a significant and unfair advantage over new entrants, smaller companies, and anyone who respects, rather than tramples on, our autonomy.
However, none of these outcomes are compatible with the “Growth Duty”, which the ICO often uses to justify its caution regarding enforcement. As the recently refreshed Statutory Guidance to the “Growth Duty” states:
“The Growth Duty does not legitimise non-compliance with other duties or objectives, and its purpose is not to achieve or pursue economic growth at the expense of necessary protections. Non-compliant activity or behaviour […] also harms the interests of legitimate businesses that are working to comply with regulatory requirements, disrupting competition and acting as a disincentive to invest in compliance.”
At the root of the ICO’s reluctance
ORG is about to publish a report that explores the ICO’s surprising reluctance to enforce the law. For the time being, we would like to draw attention to the underlying incentives of the ICO’s legal structure, which in part explain the state of information rights enforcement in the UK.
As a starting point, UK data protection law provides a clear avenue for companies and organisations that want to challenge an enforcement decision by the ICO. The same, however, isn’t true for those members of the public whose rights are being violated and whose complaints are being ignored by the ICO: case-law from the Information Tribunal and the Administrative Court has given the ICO very wide discretion to discharge its regulatory functions as it sees fit. These rulings have enabled the ICO to develop an enforcement policy that primarily fits the ICO’s own interests rather than those of the British public—in stark contrast with the EU, where the Court of Justice has recognised that the principal duty of a Data Protection Authority is the “responsibility for ensuring that the GDPR is fully enforced with all due diligence”.
Indeed, the ICO’s approach to AI enforcement reveals a delicate balancing exercise. The ICO has so far avoided taking enforcement action, thus also avoiding vexatious appeals by AI companies that could deploy vast resources to “throw lawyers” at it. At the same time, the ICO has not given “regulatory approval” to the use of data for AI training, thus avoiding formalising a divergent interpretation of “legitimate interest” that could be found to contradict the GDPR and frustrate the new Labour Government’s intention to pursue closer cooperation with the EU. As a result, the rules surrounding the use of data for training AI are left in a grey area, which favours bad actors willing to push the boundaries beyond what is reasonable.
The ICO has written a response letter to our complaint, inviting us to a meeting to give ORG an opportunity to “discuss our concerns”. However, without signalling that it intends to take action to ensure the law is upheld, the ICO risks merely managing stakeholders rather than regulating and enforcing information rights. This would closely parallel our experience with the ICO and adtech, where years of the ICO recognising problems have resulted in no actual change.
An opportunity to fix the ICO: is Labour ready to seize it?
In a nutshell, the ICO’s approach to enforcement in the AI field paints a worrying picture of a regulator with institutional incentives to avoid the stern enforcement it is tasked with. Such an approach will fail to protect the information rights of millions of British social media users and, in doing so, will contribute to a culture of irresponsible AI innovation. It has already created an enforcement gap with the EU, which shares the same rules as the UK but where the public enjoy a much higher level of protection for their personal data. The ICO has so far failed to uphold its Growth Duty to protect the UK domestic market and law-abiding businesses from unfair competition by large technology companies that are operating outside of the law.
This approach is bound to worsen matters if continued, as the Government considers the adoption of AI and other cutting-edge technologies to support public service delivery. Effective oversight and enforcement are of paramount importance to ensure technological development is steered by the right incentives and the market evolves in a way that ensures we can all reap the benefits of innovation.
The Digital Information and Smart Data Bill represents a golden opportunity to take the first steps in this direction. The upcoming reform of the ICO should, at a bare minimum:
- Clearly state in legislation that the ICO has a duty to investigate infringements and ensure the diligent application of data protection rules, and
- Extend the scope of Section 166 of the Data Protection Act to allow the Information Tribunal to scrutinise the appropriateness of the ICO’s response to a complaint.
As we have outlined in our previous work, other measures can also be taken to ensure the ICO is independent, objective, and fit for purpose.
Fundamental rights protection is the common thread that ties together a healthy society, an inclusive economy, and sustainable growth: Open Rights Group will keep fighting the rich and powerful corporate interests that want to remove our rights for their private gain.
The Data Watchdog Must Toughen Up
ORG calls for a strong and independent data protection authority.
Find out more: read about the Hands Off Our Data campaign.